A First Order Free Lunch for SQRT-Lasso
Authors
Abstract
Many statistical machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. In this paper, we study this fundamental tradeoff through the SQRT-Lasso problem for sparse linear regression and sparse precision matrix estimation in high dimensions. We explain how novel optimization techniques help address these computational challenges. In particular, we propose a pathwise iterative smoothing shrinkage thresholding algorithm for solving the SQRT-Lasso optimization problem. We further provide a novel model-based perspective for analyzing the smoothing optimization framework, which allows us to establish a nearly linear (R-linear) convergence guarantee for the proposed algorithm. This implies that solving the SQRT-Lasso optimization is almost as easy as solving the Lasso optimization. Moreover, we show that the proposed algorithm can also be applied to sparse precision matrix estimation and enjoys good computational properties. Numerical experiments are provided to support our theory.
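To make the algorithmic idea concrete, below is a minimal Python sketch of a pathwise ISTA applied to a smoothed SQRT-Lasso objective. It is an illustration under stated assumptions, not the authors' implementation: the smoothing $\sqrt{\|y - X\beta\|_2^2/n + \mu^2}$, the fixed step size, and the names sqrt_lasso_path and soft_threshold are all ours.

```python
import numpy as np

def soft_threshold(z, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqrt_lasso_path(X, y, lambdas, mu=1e-2, n_iter=500):
    """Pathwise ISTA on a smoothed SQRT-Lasso objective.

    For each lam in `lambdas` (largest first), approximately minimizes
        sqrt(||y - X b||^2 / n + mu^2) + lam * ||b||_1,
    warm-starting from the previous solution.  As mu -> 0, the smooth
    term recovers the nonsmooth loss ||y - X b||_2 / sqrt(n).
    """
    n, p = X.shape
    # Conservative step size: the gradient of the smoothed loss is
    # Lipschitz with constant at most 2 * ||X||_2^2 / (n * mu).
    step = n * mu / (2.0 * np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(p)
    path = []
    for lam in lambdas:
        for _ in range(n_iter):
            r = y - X @ beta
            f_mu = np.sqrt(r @ r / n + mu ** 2)   # smoothed loss value
            grad = -(X.T @ r) / (n * f_mu)        # gradient of the smoothed loss
            beta = soft_threshold(beta - step * grad, step * lam)
        path.append(beta.copy())
    return path
```

Warm-starting each solve along a decreasing sequence of regularization values is the "pathwise" part of the scheme; the paper's analysis concerns why such smoothed iterations can converge R-linearly, nearly as fast as ISTA on the plain Lasso.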
Similar resources
On the edge-connectivity of C_4-free graphs
Let $G$ be a connected graph of order $n$ and minimum degree $\delta(G)$. The edge-connectivity $\lambda(G)$ of $G$ is the minimum number of edges whose removal renders $G$ disconnected. It is well-known that $\lambda(G) \leq \delta(G)$, and if $\lambda(G) = \delta(G)$, then $G$ is said to be maximally edge-connected. A classical result by Chartrand gives the sufficient condition $\delta(G) \geq \frac{n-1}{2}$ fo...
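As a quick sanity check of the bound $\lambda(G) \leq \delta(G)$ stated above (unrelated to the cited paper's proofs), one can compare the two quantities with networkx; the Petersen graph used here is just a convenient $C_4$-free example of our choosing.

```python
import networkx as nx

# Compare edge-connectivity lambda(G) with minimum degree delta(G)
# on a C_4-free example; the Petersen graph is 3-regular with girth 5.
G = nx.petersen_graph()
lam = nx.edge_connectivity(G)          # lambda(G)
delta = min(d for _, d in G.degree())  # delta(G)
print(lam, delta)                      # 3 3 -> maximally edge-connected
```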
Second Order Moment Asymptotic Expansions for a Randomly Stopped and Standardized Sum
This paper establishes the first four moment expansions to the order $o(a^{-1})$ of $S'_{t_a}/\sqrt{t_a}$, where $S'_n = \sum_{i=1}^{n} Y_i$ is a simple random walk with $E(Y_i) = 0$, and $t_a$ is a stopping time given by $t_a = \inf\{\, n \geq 1 : n + S_n + \zeta_n > a \,\}$, where $S_n = \sum_{i=1}^{n} X_i$ is another simple random walk with $E(X_i) = 0$, and $\{\zeta_n, n \geq 1\}$ is a sequence of ran...
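To make this setup concrete, here is a hedged Monte Carlo sketch of the stopped standardized sum in Python; the step distributions, the correlation between $X_i$ and $Y_i$, and the choice $\zeta_n = 1/n$ are all illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stopped_standardized_sum(a, rho=0.5):
    """One draw of S'_{t_a} / sqrt(t_a) under illustrative assumptions.

    X_i and Y_i are standard normal with correlation rho (our choice),
    zeta_n = 1/n is an arbitrary vanishing perturbation, and
    t_a = inf{ n >= 1 : n + S_n + zeta_n > a }.
    """
    S = Sp = 0.0
    n = 0
    while True:
        n += 1
        x = rng.standard_normal()
        y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
        S += x   # S_n  (drives the stopping time)
        Sp += y  # S'_n (the stopped, standardized sum)
        if n + S + 1.0 / n > a:
            return Sp / np.sqrt(n)

draws = np.array([stopped_standardized_sum(200.0) for _ in range(2000)])
print(draws.mean(), draws.var())  # the first two moments the expansions approximate
```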
No Free Lunch for Noise Prediction
No-free-lunch theorems have shown that learning algorithms cannot be universally good. We show that no free lunch exists for noise prediction as well. We show that when the noise is additive and the prior over target functions is uniform, a prior on the noise distribution cannot be updated, in the Bayesian sense, from any finite data set. We emphasize the importance of a prior over the target f...
An R Package flare for High Dimensional Linear Regression and Precision Matrix Estimation
This paper describes an R package named flare, which implements a family of new high dimensional regression methods (LAD Lasso, SQRT Lasso, $\ell_q$ Lasso, and Dantzig selector) and their extensions to sparse precision matrix estimation (TIGER and CLIME). These methods exploit different nonsmooth loss functions to gain modeling flexibility, estimation robustness, and tuning insensitiveness. The devel...
The flare package for high dimensional linear regression and precision matrix estimation in R
This paper describes an R package named flare, which implements a family of new high dimensional regression methods (LAD Lasso, SQRT Lasso, $\ell_q$ Lasso, and Dantzig selector) and their extensions to sparse precision matrix estimation (TIGER and CLIME). These methods exploit different nonsmooth loss functions to gain modeling flexibility, estimation robustness, and tuning insensitiveness. The develo...
Journal: CoRR
Volume: abs/1605.07950
Publication date: 2016